
I-Smooth for improved minimum classification error training



Abstract

Increasing the generalization capability of Discriminative Training (DT) of Hidden Markov Models (HMMs) has recently attracted growing interest in the speech recognition field. In particular, achieving such gains with only minor modifications to existing DT methods is of significant practical importance. In this paper, we propose a way to increase the generalization capability of a widely used training method, Minimum Classification Error (MCE) training of HMMs, with limited changes to its original framework. To this end, we define boundary data, obtained by applying a large steepness parameter to the training samples, and confusion data, obtained by applying a small steepness parameter, and then perform a soft interpolation between the two according to the occupancy counts of the boundary data and the ratio between the boundary and confusion occupancy counts. The final HMM parameters are then tuned in the same manner as in standard MCE, using the interpolated boundary data. We show that the proposed method achieves lower error rates than a standard HMM training framework on a phoneme classification task on the TIMIT speech corpus.
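The two ingredients the abstract describes, a sigmoid loss whose steepness parameter separates "boundary" from "confusion" behaviour, and an occupancy-weighted interpolation of their statistics, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the smoothing constant `tau`, and the exact form of the interpolation weight are hypothetical (the paper derives its weight from both the boundary occupancy count and the boundary/confusion occupancy ratio, whose precise formula is not given in the abstract).

```python
import math

def mce_loss(d, steepness):
    # Smoothed 0/1 loss used in MCE: a sigmoid over the
    # misclassification measure d. A large steepness gives
    # near-binary "boundary" behaviour (only samples close to the
    # decision boundary contribute); a small steepness gives soft,
    # wide "confusion" behaviour.
    return 1.0 / (1.0 + math.exp(-steepness * d))

def interpolate_stats(gamma_b, stat_b, gamma_c, stat_c, tau=10.0):
    # Hypothetical I-smoothing-style interpolation of accumulated
    # statistics: boundary statistics (stat_b, occupancy gamma_b)
    # dominate when gamma_b is large relative to the smoothing
    # constant tau; otherwise the estimate is backed off toward the
    # confusion statistics (stat_c, occupancy gamma_c).
    w = gamma_b / (gamma_b + tau)
    return w * stat_b + (1.0 - w) * stat_c
```

With this weighting, a Gaussian seen often in the boundary data keeps its sharp MCE-style statistics, while a rarely occupied one falls back on the smoother confusion statistics, which is the source of the improved generalization the abstract claims.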
